    Convergence Rate Analysis for Optimal Computing Budget Allocation Algorithms

    Ordinal optimization (OO) is a widely studied technique for optimizing discrete-event dynamic systems (DEDS). It evaluates the performance of a finite set of system designs by sampling and aims to make correct ordinal comparisons among the designs. A well-known method in OO is optimal computing budget allocation (OCBA). It establishes optimality conditions on the number of samples allocated to each design, and the sample allocation that satisfies these conditions is shown to asymptotically maximize the probability of correctly selecting the best design. In this paper, we investigate two popular OCBA algorithms. Assuming the sampling variance of each design is known, we characterize their convergence rates with respect to different performance measures. We first show that the two OCBA algorithms achieve the optimal convergence rate under the probability of correct selection and the expected opportunity cost, filling a gap in the convergence analysis of OCBA algorithms. We then extend the analysis to cumulative regret, a measure widely studied in machine learning, and show that, with minor modifications, the two OCBA algorithms attain the optimal convergence rate under cumulative regret as well. This suggests that algorithms designed from the OCBA optimality conditions have potential for broader use.
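
    The paper analyzes algorithms built on the classical OCBA optimality conditions. As a point of reference, below is a minimal sketch of a one-shot budget allocation that follows those conditions, assuming known variances and that a smaller mean is better; it is not the paper's specific algorithms, and all function and variable names are illustrative.

```python
import numpy as np

def ocba_allocation(means, variances, total_budget):
    """Allocate a total sampling budget over designs according to the
    classical OCBA optimality conditions (smaller mean = better design).

    means, variances : per-design sample means and (known) variances
    total_budget     : total number of samples to distribute
    """
    means = np.asarray(means, dtype=float)
    sigma = np.sqrt(np.asarray(variances, dtype=float))
    k = len(means)
    best = int(np.argmin(means))
    nonbest = [i for i in range(k) if i != best]

    # Gaps between each non-best design and the current best (assumed distinct).
    delta = means - means[best]

    # N_i / N_j = (sigma_i / delta_i)^2 / (sigma_j / delta_j)^2 for non-best i, j.
    ratios = np.zeros(k)
    for i in nonbest:
        ratios[i] = (sigma[i] / delta[i]) ** 2

    # N_best = sigma_best * sqrt(sum over non-best of (N_i / sigma_i)^2).
    ratios[best] = sigma[best] * np.sqrt(np.sum((ratios[nonbest] / sigma[nonbest]) ** 2))

    # Scale the ratios to the budget and round to integers.
    alloc = np.floor(total_budget * ratios / ratios.sum()).astype(int)
    alloc[best] += total_budget - alloc.sum()  # give any rounding remainder to the best
    return alloc

# Example: three designs with known unit variances and a budget of 100 samples.
print(ocba_allocation(means=[1.0, 1.2, 1.5], variances=[1.0, 1.0, 1.0], total_budget=100))
```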

    Characterization of a high efficiency silicon photomultiplier for millisecond to sub-microsecond astrophysical transient searches

    We characterized the S14160-3050HS Multi-Pixel Photon Counter (MPPC), a high-efficiency, single-channel silicon photomultiplier manufactured by Hamamatsu Photonics K.K. All measurements were performed at a room temperature of (23.0 ± 0.3) °C. We obtained an I-V curve and used relative derivatives to find a breakdown voltage of 38.88 V. At an overvoltage of 3 V, we find a dark count rate of 1.08 MHz, a crosstalk probability of 21%, a photon detection efficiency of 55% at 450 nm, and saturation at 1.0×10^11 photons per second. The S14160-3050HS MPPC is a candidate detector for the Ultra-Fast Astronomy (UFA) telescope, which will characterize the optical (320 nm - 650 nm) sky on millisecond to sub-microsecond timescales using two photon-counting arrays operated in coincidence on the 0.7 m Nazarbayev University Transient Telescope at the Assy-Turgen Astrophysical Observatory (NUTTelA-TAO), located near Almaty, Kazakhstan. We discuss the advantages and disadvantages of using the S14160-3050HS MPPC for the UFA telescope and for future ground-based telescopes in sub-second time-domain astrophysics.
    Comment: 7 pages, 6 figures, 1 table; submitted to SPIE Astronomical Telescopes + Instrumentation 2020 conference proceedings
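
    For context, the breakdown voltage quoted above is extracted from the I-V curve via its relative derivative, whose peak marks the onset of breakdown. Below is a minimal sketch of that computation; the data file and column layout are hypothetical.

```python
import numpy as np

def breakdown_voltage(voltage, current):
    """Estimate the SiPM breakdown voltage from a reverse-bias I-V curve
    using the relative (logarithmic) derivative d(ln I)/dV, whose peak
    marks the onset of Geiger-mode breakdown.
    """
    voltage = np.asarray(voltage, dtype=float)
    current = np.asarray(current, dtype=float)

    # Relative derivative: (1/I) dI/dV = d(ln I)/dV.
    rel_deriv = np.gradient(np.log(current), voltage)
    return voltage[np.argmax(rel_deriv)]

# Hypothetical usage: two columns (bias voltage, current) from an I-V sweep.
# v, i = np.loadtxt("iv_sweep.csv", delimiter=",", unpack=True)
# print(f"Breakdown voltage: {breakdown_voltage(v, i):.2f} V")
```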

    Multiple Instance Curriculum Learning for Weakly Supervised Object Detection

    When supervising an object detector with weakly labeled data, most existing approaches are prone to getting trapped in discriminative object parts, e.g., finding the face of a cat instead of its full body, because supervision on the extent of the full object is lacking. To address this challenge, we incorporate object segmentation into detector training, which guides the model to correctly localize full objects. We propose the multiple instance curriculum learning (MICL) method, which injects curriculum learning (CL) into the multiple instance learning (MIL) framework. MICL starts by automatically picking easy training examples, where the extent of the segmentation mask agrees with the detection bounding box. The training set is gradually expanded to include harder examples so as to train strong detectors that handle complex images. The proposed MICL method with segmentation in the loop outperforms state-of-the-art weakly supervised object detectors by a substantial margin on the PASCAL VOC datasets.
    Comment: Published in BMVC 201
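
    The "easy example" criterion described above checks agreement between the detection box and the extent of the segmentation mask. Below is a minimal sketch of one way such a check could look, using IoU between the detection box and the tightest box around the mask; the threshold and helper names are illustrative, not the paper's exact criterion.

```python
import numpy as np

def box_iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def mask_to_box(mask):
    """Tightest bounding box (x1, y1, x2, y2) around a binary mask."""
    ys, xs = np.nonzero(mask)
    return float(xs.min()), float(ys.min()), float(xs.max() + 1), float(ys.max() + 1)

def select_easy_examples(det_boxes, seg_masks, iou_threshold=0.7):
    """Return indices of examples where the segmentation-derived box agrees
    with the detection box, i.e. the "easy" examples used to start training."""
    return [idx for idx, (box, mask) in enumerate(zip(det_boxes, seg_masks))
            if box_iou(box, mask_to_box(mask)) >= iou_threshold]
```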

    Convergence Analysis of Stochastic Kriging-Assisted Simulation with Random Covariates

    We consider performing simulation experiments in the presence of covariates. Here, covariates refer to input information, other than the system designs, that also affects system performance. To make decisions, decision makers need to know the covariate values of the problem. Traditionally in simulation-based decision making, simulation samples are collected after the covariate values are known; in contrast, under the new framework of simulation with covariates, simulation starts before the covariate values are revealed, collecting samples at covariate values that might appear later. When the covariate values are then revealed, the collected simulation samples are used directly to predict the desired results. This framework significantly reduces decision time compared to the traditional approach. In this paper, we follow this framework and assume there is a finite number of system designs. We adopt the stochastic kriging (SK) metamodel and use it to predict the performance of each design and the best design. The goal is to study how fast the prediction errors diminish with the number of covariate points sampled. This is a fundamental problem in simulation with covariates and helps quantify the relationship between offline simulation effort and online prediction accuracy. In particular, we adopt the maximal integrated mean squared error (IMSE) and the integrated probability of false selection (IPFS) as measures of error for the system performance and best design predictions, respectively. We then establish convergence rates for the two measures under mild conditions. Finally, these convergence behaviors are illustrated numerically using test examples.
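
    As a rough illustration of the framework, the sketch below fits one metamodel per design from offline samples at a set of covariate points and, once a covariate value is revealed, predicts each design's performance and the best design. It stands in for stochastic kriging using scikit-learn's ordinary Gaussian-process regressor, feeding the simulation noise in through the per-point alpha term; all names and settings are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_design_metamodels(covariate_points, sample_means, sample_vars, n_reps):
    """Fit one GP metamodel per design from offline simulation output.

    covariate_points : (m, d) covariate values simulated offline
    sample_means     : (k, m) sample mean of each design at each covariate point
    sample_vars      : (k, m) sample variance of the simulation output
    n_reps           : replications per (design, covariate point) pair
    """
    models = []
    for mu, var in zip(sample_means, sample_vars):
        # Heteroscedastic noise: variance of the sample mean at each covariate point.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      alpha=var / n_reps,
                                      normalize_y=True)
        gp.fit(covariate_points, mu)
        models.append(gp)
    return models

def predict_best_design(models, new_covariate):
    """Once the covariate value is revealed, predict each design's performance
    and return the index of the predicted best design (smaller is better)."""
    x = np.atleast_2d(new_covariate)
    preds = np.array([gp.predict(x)[0] for gp in models])
    return int(np.argmin(preds)), preds
```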